Results 1 - 20 of 22
1.
medRxiv ; 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38585914

ABSTRACT

Background: Randomised controlled trials (RCTs) inform healthcare decisions. Unfortunately, some published RCTs contain false data, and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesise all RCTs that have been conducted on a given topic. This means that any such 'problematic studies' are likely to be included, but there are no agreed methods for identifying them. The INSPECT-SR project is developing a tool to identify problematic RCTs in systematic reviews of healthcare-related interventions. The tool will guide the user through a series of 'checks' to determine a study's authenticity. The first objective in the development process is to assemble a comprehensive list of checks to consider for inclusion. Methods: We assembled an initial list of checks for assessing the authenticity of research studies, with no restriction to RCTs, and categorised these into five domains: Inspecting results in the paper; Inspecting the research team; Inspecting conduct, governance, and transparency; Inspecting text and publication details; Inspecting the individual participant data. We implemented this list as an online survey, and invited people with expertise and experience of assessing potentially problematic studies to participate through professional networks and online forums. Participants were invited to provide feedback on the checks on the list, and were asked to describe any additional checks they knew of that were not featured in the list. Results: Extensive feedback on an initial list of 102 checks was provided by 71 participants based in 16 countries across five continents. Fourteen new checks were proposed across the five domains, and suggestions were made to reword checks on the initial list. An updated list of checks was constructed, comprising 116 checks. Many participants expressed a lack of familiarity with statistical checks, and emphasised the importance of the tool being feasible to use. Conclusions: A comprehensive list of trustworthiness checks has been produced. The checks will be evaluated to determine which should be included in the INSPECT-SR tool.

2.
BMJ Open ; 14(3): e084164, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38471680

ABSTRACT

INTRODUCTION: Randomised controlled trials (RCTs) inform healthcare decisions. It is now apparent that some published RCTs contain false data and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesise all RCTs that have been conducted on a given topic. While it is usual to assess methodological features of the RCTs in the process of undertaking a systematic review, it is not usual to consider whether the RCTs contain false data. Studies containing false data therefore go unnoticed and contribute to systematic review conclusions. The INveStigating ProblEmatic Clinical Trials in Systematic Reviews (INSPECT-SR) project will develop a tool to assess the trustworthiness of RCTs in systematic reviews of healthcare-related interventions. METHODS AND ANALYSIS: The INSPECT-SR tool will be developed using expert consensus in combination with empirical evidence, over five stages: (1) a survey of experts to assemble a comprehensive list of checks for detecting problematic RCTs, (2) an evaluation of the feasibility and impact of applying the checks to systematic reviews, (3) a Delphi survey to determine which of the checks are supported by expert consensus, culminating in (4) a consensus meeting to select checks to be included in a draft tool and to determine its format, and (5) prospective testing of the draft tool in the production of new health systematic reviews, to allow refinement based on user feedback. We anticipate that the INSPECT-SR tool will help researchers to identify problematic studies and will help patients by protecting them from the influence of false data on their healthcare. ETHICS AND DISSEMINATION: The University of Manchester ethics decision tool returned the result (30 September 2022) that ethical approval was not required for this project, which comprises secondary research and surveys of professionals about subjects relating to their expertise. Informed consent will be obtained from all survey participants. All results will be published as open-access articles. The final tool will be made freely available.


Subject(s)
Evidence-Based Medicine , Research Design , Humans , Consensus , Evidence-Based Medicine/methods , Informed Consent , Randomized Controlled Trials as Topic , Systematic Reviews as Topic
3.
medRxiv ; 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37873409

ABSTRACT

Introduction: Randomised controlled trials (RCTs) inform healthcare decisions. It is now apparent that some published RCTs contain false data and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesise all RCTs that have been conducted on a given topic. While it is usual to assess methodological features of the RCTs in the process of undertaking a systematic review, it is not usual to consider whether the RCTs contain false data. Studies containing false data therefore go unnoticed and contribute to systematic review conclusions. The INSPECT-SR project will develop a tool to assess the trustworthiness of RCTs in systematic reviews of healthcare-related interventions. Methods and analysis: The INSPECT-SR tool will be developed using expert consensus in combination with empirical evidence, over five stages: 1) a survey of experts to assemble a comprehensive list of checks for detecting problematic RCTs, 2) an evaluation of the feasibility and impact of applying the checks to systematic reviews, 3) a Delphi survey to determine which of the checks are supported by expert consensus, culminating in 4) a consensus meeting to select checks to be included in a draft tool and to determine its format, and 5) prospective testing of the draft tool in the production of new health systematic reviews, to allow refinement based on user feedback. We anticipate that the INSPECT-SR tool will help researchers to identify problematic studies, and will help patients by protecting them from the influence of false data on their healthcare.

4.
Psychol Sci ; 34(4): 512-522, 2023 04.
Article in English | MEDLINE | ID: mdl-36730433

ABSTRACT

In April 2019, Psychological Science published its first issue in which all Research Articles received the Open Data badge. We used that issue to investigate the effectiveness of this badge, focusing on adherence to its aim at Psychological Science: sharing both data and code to ensure reproducibility of results. Twelve researchers of varying experience levels attempted to reproduce the results of the empirical articles in the target issue (at least three researchers per article). We found that all 14 articles provided at least some data and six provided analysis code, but only one article was rated to be exactly reproducible, and three were rated as essentially reproducible with minor deviations. We suggest that researchers should be encouraged to adhere to the higher standard in force at Psychological Science. Moreover, a check of reproducibility during peer review may be preferable to the disclosure method of awarding badges.


Subject(s)
Editorial Policies , Periodicals as Topic , Psychology , Humans , Reproducibility of Results , Research/standards , Information Dissemination
5.
BMC Res Notes ; 15(1): 203, 2022 Jun 11.
Article in English | MEDLINE | ID: mdl-35690782

ABSTRACT

The rising number of preprints and publications, combined with persistently inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret a paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors' conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study's conclusions, and its potential impact and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.
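
The kind of straightforward screening described above is easy to illustrate. The following is a minimal sketch of a hypothetical reporting-item screen; the item list and regular expressions are our own invention for illustration, not those of any published tool.

```python
import re

# Hypothetical patterns for a few straightforward reporting items;
# real screening tools use far more extensive, validated rules.
REPORT_ITEMS = {
    "randomisation": re.compile(r"\brandomi[sz](?:ed|ation)\b", re.IGNORECASE),
    "blinding": re.compile(r"\b(?:single|double|triple)[- ]blind", re.IGNORECASE),
    "trial registration": re.compile(r"\b(?:NCT\d{8}|ISRCTN\d{8})\b"),
    "sample size planning": re.compile(r"\bpower (?:analysis|calculation)\b", re.IGNORECASE),
}

def screen_manuscript(text: str) -> dict:
    """Report, for each item, whether the manuscript text mentions it."""
    return {item: bool(pattern.search(text)) for item, pattern in REPORT_ITEMS.items()}

manuscript = ("Participants were randomized to drug or placebo in this "
              "double-blind trial (registration: NCT01234567).")
for item, found in screen_manuscript(manuscript).items():
    print(f"{item}: {'found' if found else 'MISSING'}")
```

Running this on the sample sentence flags "sample size planning" as missing, which is exactly the kind of quick, mechanical signal such tools can hand to a human reviewer.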


Subject(s)
Editorial Policies , Peer Review, Research , Research Design , Research Report
8.
Article in English | MEDLINE | ID: mdl-31788009

ABSTRACT

Tu et al. (Emerg Themes Epidemiol 5:2, 2008. https://doi.org/10.1186/1742-7622-5-2) asserted that suppression, Simpson's Paradox, and Lord's Paradox are all the same phenomenon: the reversal paradox. In the reversal paradox, the association between an outcome variable and an explanatory (predictor) variable is reversed when another explanatory variable is added to the analysis. More specifically, Tu et al. (2008) purported to demonstrate that these three paradoxes are different manifestations of the same phenomenon, differently named depending on the scaling of the outcome variable, the explanatory variable, and the third variable. According to Tu et al. (2008), when all three variables are continuous, the phenomenon is called suppression; when all three variables are categorical, the phenomenon is called Simpson's Paradox; and when the outcome variable and the third variable are continuous but the explanatory variable is categorical, the phenomenon is called Lord's Paradox. We show that (a) the strong form of Simpson's Paradox is equivalent to negative suppression for a 2 × 2 × 2 contingency table, (b) the weak form of Simpson's Paradox is equivalent to classical suppression for a 2 × 2 × 2 contingency table, and (c) Lord's Paradox is not the same phenomenon as suppression or Simpson's Paradox.
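
The reversal at the heart of these paradoxes is easy to demonstrate numerically. Below is a short sketch using the widely cited kidney-stone treatment figures as a stand-in 2 × 2 × 2 table; the numbers are chosen only to exhibit the effect and are not drawn from Tu et al.

```python
# A 2 x 2 x 2 table: success counts and totals for two treatments,
# stratified by a third variable (the widely cited kidney-stone numbers).
strata = {
    "small stones": {"A": (81, 87), "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

# Within each stratum, treatment A has the higher success rate ...
for stratum, arms in strata.items():
    rates = {arm: round(s / n, 3) for arm, (s, n) in arms.items()}
    print(stratum, rates)

# ... yet pooling the strata reverses the comparison: Simpson's Paradox.
for arm in ("A", "B"):
    successes = sum(strata[st][arm][0] for st in strata)
    totals = sum(strata[st][arm][1] for st in strata)
    print(arm, "pooled:", round(successes / totals, 3))
```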

9.
PeerJ ; 6: e5656, 2018.
Article in English | MEDLINE | ID: mdl-30258732

ABSTRACT

We comment on Eichstaedt et al.'s (2015a) claim to have shown that language patterns among Twitter users, aggregated at the level of US counties, predicted county-level mortality rates from atherosclerotic heart disease (AHD), with "negative" language being associated with higher rates of death from AHD and "positive" language associated with lower rates. First, we examine some of Eichstaedt et al.'s apparent assumptions about the nature of AHD, as well as some issues related to the secondary analysis of online data and to considering counties as communities. Next, using the data files supplied by Eichstaedt et al., we reproduce their regression- and correlation-based models, substituting mortality from an alternative cause of death (namely, suicide) as the outcome variable, and observe that the purported associations between "negative" and "positive" language and mortality are reversed when suicide is used as the outcome variable. We identify numerous other conceptual and methodological limitations that call into question the robustness and generalizability of Eichstaedt et al.'s claims, even when these are based on the results of their ridge regression/machine learning model. We conclude that there is no good evidence that analyzing Twitter data in bulk in this way can add anything useful to our ability to understand geographical variation in AHD mortality rates.
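
The outcome-substitution check described here is simple to sketch. The snippet below shows the general shape of such a reanalysis using ordinary least squares; the file name and column names are placeholders of ours, not Eichstaedt et al.'s actual variables, and OLS stands in for the full set of models they reported.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical county-level data; all names here are placeholders.
df = pd.read_csv("county_language_mortality.csv")
predictors = sm.add_constant(df[["negative_language", "positive_language"]])

# Original-style model: county language scores predicting AHD mortality ...
ahd_model = sm.OLS(df["ahd_mortality"], predictors).fit()

# ... and the identical model with an alternative outcome substituted.
# If the language coefficients flip sign, the original association is
# unlikely to reflect anything specific to heart disease.
suicide_model = sm.OLS(df["suicide_mortality"], predictors).fit()

print(ahd_model.params)
print(suicide_model.params)
```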

10.
J Humanist Psychol ; 58(3): 239-261, 2018 May.
Article in English | MEDLINE | ID: mdl-29706664

ABSTRACT

An extraordinary claim was made by one of the leading researchers within positive psychology, namely, that there is a universal, invariant ratio of positive to negative emotions that serves as a unique tipping point between flourishing and languishing in individuals, marriages, organizations, and other human systems across all cultures and times. Known as the "critical positivity ratio," this finding was supposedly derived from the famous Lorenz equation in physics by using the mathematics of nonlinear dynamic systems, and was defined precisely as "2.9013." This exact number was widely touted as a great discovery by many leaders of positive psychology, had tremendous impact in various applied areas of psychology and, more broadly, was extensively cited in both the scientific literature and the global popular media. However, this finding has been demonstrated to be bogus. Since its advent as a relatively new subdiscipline, positive psychology has claimed superiority to its precursor, the subdiscipline of humanistic psychology, in terms of supposedly both using more rigorous science and avoiding popularizing nonsense. The debunking of the critical positivity ratio demonstrates that positive psychology did not live up to these claims, and this has important implications, which are discussed in terms of "romantic scientism" and "voodoo science." In addition, articles in the special issue on the "Implications of Debunking the 'Critical Positivity Ratio' for Humanistic Psychology" are introduced, as they also delve into these concerns.
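
For reference, the Lorenz system from which the ratio was purportedly derived is the standard set of coupled differential equations shown below; the contested step was the mapping of emotion data onto its variables, not the equations themselves.

```latex
\begin{aligned}
\frac{dx}{dt} &= \sigma (y - x),\\
\frac{dy}{dt} &= x(\rho - z) - y,\\
\frac{dz}{dt} &= xy - \beta z.
\end{aligned}
```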

11.
Curr Biol ; 28(10): R594-R596, 2018 05 21.
Article in English | MEDLINE | ID: mdl-29787718

ABSTRACT

Reid et al. [1] analysed data from 39 third-trimester fetuses, concluding that they showed a preferential head-orienting reaction towards lights projected through the uterine wall in a face-like arrangement, as opposed to an inverted triangle of dots. These results imply not only that assessment of visual-perceptive responses is possible in prenatal subjects, but also that a measurable preference for faces exists before birth. However, we have identified three substantial problems with Reid et al.'s [1] method and analyses, which we outline here.


Subject(s)
Fetus , Visual Perception , Female , Humans , Pregnancy , Pregnancy Trimester, Third
12.
J Exp Psychol Gen ; 146(9): 1372-1377, 2017 09.
Article in English | MEDLINE | ID: mdl-28846007

ABSTRACT

This article examines the concept of emodiversity, put forward by Quoidbach et al. (2014) as a novel source of information about "the health of the human emotional ecosystem" (p. 2057). Quoidbach et al. drew an analogy between emodiversity as a desirable property of a person's emotional make-up and biological diversity as a desirable property of an ecosystem. They claimed that emodiversity was an independent predictor of better mental and physical health outcomes in two large-scale studies. Here, we show that Quoidbach et al.'s construct of emodiversity suffers from several theoretical and practical deficiencies, which make these authors' use of Shannon's (1948) entropy formula to measure emodiversity highly questionable. Our reanalysis of Quoidbach et al.'s two studies shows that the apparently substantial effects that these authors reported are likely due to a failure to conduct appropriate hierarchical regression in one case and to suppression effects in the other. It appears that Quoidbach et al.'s claims about emodiversity may reduce to little more than a set of computational and statistical artifacts.
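
The index in question is straightforward to compute. The sketch below is a generic implementation of Shannon's entropy applied to emotion frequency counts; it is our illustration of the general approach, not Quoidbach et al.'s scoring code.

```python
import math

def emodiversity(counts: list) -> float:
    """Shannon entropy, H = -sum(p_i * ln p_i), over emotion frequencies.

    Higher values mean emotions are experienced more evenly across
    categories; zero-count categories contribute nothing to the sum.
    """
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# One respondent feels almost exclusively a single emotion, the other an
# even mix; the second scores far higher on the entropy-based index.
print(round(emodiversity([9, 1, 0, 0]), 2))  # 0.33
print(round(emodiversity([3, 3, 2, 2]), 2))  # 1.37
```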


Subject(s)
Artifacts , Emotions , Ecosystem , Humans
13.
Nature ; 546(7660): E6-E7, 2017 06 28.
Article in English | MEDLINE | ID: mdl-28658214
14.
BMC Nutr ; 3: 54, 2017.
Article in English | MEDLINE | ID: mdl-32153834

ABSTRACT

BACKGROUND: We present the results of a reanalysis of four articles from the Cornell Food and Brand Lab based on data collected from diners at an Italian restaurant buffet. METHOD: We calculated whether the means, standard deviations, and test statistics were compatible with the sample size. Test statistics and p values were recalculated. We also applied deductive logic to see whether the claims made in each article were compatible with the claims made in the others. We have so far been unable to obtain the data from the authors of the four articles. RESULTS: A thorough reading of the articles and careful reanalysis of the results revealed a wide range of problems. The reported numbers of diners in each condition are inconsistent both within and between the four articles. In some cases, the degrees of freedom of between-participant test statistics are larger than the sample size, which is impossible. Many of the computed F and t statistics are inconsistent with the reported means and standard deviations. In some cases, the number of possible inconsistencies for a single statistic was such that we were unable to determine which of the components of that statistic were incorrect. Our Appendix reports approximately 150 inconsistencies in these four articles, which we were able to identify from the reported statistics alone. CONCLUSIONS: We hope that our analysis will encourage readers, using and extending the simple methods that we describe, to undertake their own efforts to verify published results, and that such initiatives will improve the accuracy and reproducibility of the scientific literature. We also anticipate that the editors of the journals that published these four articles may wish to consider whether any corrective action is required.
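
The simplest check of this kind asks whether a reported mean is arithmetically possible at all: the sum of n integer-valued responses is an integer, so the mean must be expressible as an integer divided by n. Below is a generic sketch of such a granularity check (often called a GRIM test); it is our illustration, not the authors' exact procedure.

```python
import math

def mean_is_possible(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Granularity check: can `reported_mean` arise from n integer values?

    The true mean must equal k / n for some integer k, so we test whether
    either integer total nearest to mean * n reproduces the reported mean
    at the reported precision.
    """
    total = reported_mean * n
    return any(
        round(k / n, decimals) == round(reported_mean, decimals)
        for k in (math.floor(total), math.ceil(total))
    )

# A mean of 2.57 from 15 diners on an integer scale is impossible:
# 38/15 = 2.53 and 39/15 = 2.60, so no integer total yields 2.57.
print(mean_is_possible(2.57, 15))  # False
print(mean_is_possible(2.60, 15))  # True
```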

15.
F1000Res ; 5: 1778, 2016.
Article in English | MEDLINE | ID: mdl-27606051

ABSTRACT

In their 2015 paper, Thorstenson, Pazda, and Elliot offered evidence from two experiments that perception of colors on the blue-yellow axis was impaired if the participants had watched a sad movie clip, compared to participants who watched clips designed to induce a happy or neutral mood. Subsequently, these authors retracted their article, citing a mistake in their statistical analyses and a problem with the data in one of their experiments. Here, we discuss a number of other methodological problems with Thorstenson et al.'s experimental design, and also demonstrate that the problems with the data go beyond what these authors reported. We conclude that repeating one of the two experiments, with the minor revisions proposed by Thorstenson et al., will not be sufficient to address the problems with this work.

16.
PLoS One ; 11(6): e0156415, 2016.
Article in English | MEDLINE | ID: mdl-27270924

ABSTRACT

We critically re-examine Fredrickson et al.'s renewed claims concerning the differential relationship between hedonic and eudaimonic forms of well-being and gene expression, namely that people who experience a preponderance of eudaimonic well-being have gene expression profiles that are associated with more favorable health outcomes. By means of an extensive reanalysis of their data, we identify several discrepancies between what these authors claimed and what their data support; we further show that their different analysis models produce mutually contradictory results. We then show how Fredrickson et al.'s most recent article on this topic not only fails to adequately address our previously published concerns about their earlier related work, but also introduces significant further problems, including inconsistency in their hypotheses. Additionally, we demonstrate that regardless of which statistical model is used to analyze their data, Fredrickson et al.'s method can be highly sensitive to the inclusion (or exclusion) of data from a single subject. We reiterate our previous conclusions, namely that there is no evidence that Fredrickson et al. have established a reliable empirical distinction between their two delineated forms of well-being, nor that eudaimonic well-being provides any overall health benefits over hedonic well-being.
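
The single-subject sensitivity described here can be probed with a simple leave-one-out loop: refit the model once per subject with that subject excluded, and flag cases where the key coefficient moves substantially. A minimal sketch, with placeholder file and variable names rather than Fredrickson et al.'s actual data or models:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical subject-level data; all column names are placeholders.
df = pd.read_csv("wellbeing_expression.csv")

def eudaimonic_coef(data: pd.DataFrame) -> float:
    """Fit the regression and return the coefficient of interest."""
    X = sm.add_constant(data[["eudaimonic", "hedonic"]])
    return sm.OLS(data["expression_score"], X).fit().params["eudaimonic"]

full = eudaimonic_coef(df)

# Leave-one-out: a result that changes drastically (or flips sign) when a
# single subject is dropped is too fragile to support strong claims.
for i in df.index:
    loo = eudaimonic_coef(df.drop(index=i))
    if abs(loo - full) > 0.5 * abs(full):
        print(f"dropping subject {i}: coefficient {full:.3f} -> {loo:.3f}")
```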


Subject(s)
Gene Expression Regulation , Genomics/methods , Humans
17.
Am Psychol ; 70(6): 571-3, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26348345

ABSTRACT

Comments on the original article "Life is pretty meaningful," by S. J. Heintzelman and L. A. King (see record 2014-03265-001). Heintzelman and King argued that meaning in life (MIL) is widely experienced and exists at high levels. In this brief commentary, the current authors examine what they believe are several flaws in this argument: a lack of clarity in defining MIL; the questionable validity of the instruments used to measure MIL throughout Heintzelman and King's article; and an erroneous interpretation of quantitative reports of MIL from surveys and the academic literature.


Subject(s)
Life , Motivation , Personal Satisfaction , Humans
19.
Am Psychol ; 69(6): 629-32, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25197848

ABSTRACT

Comments on the article by Fredrickson and Losada (see record 2005-11834-001). Recently, the current authors (Brown, Sokal, & Friedman, 2013) debunked the widely cited claim made by Fredrickson and Losada (2005) that their use of a mathematical model drawn from nonlinear dynamics (namely, the Lorenz equations from fluid dynamics) provided theoretical support for the existence of a pair of critical positivity-ratio values (2.9013 and 11.6346) such that individuals whose ratios fall between these values will "flourish," whereas people whose ratios lie outside this ideal range will "languish." For lack of space in our previous article, we refrained from addressing, except in passing, the question of whether there might be empirical evidence for the existence of one or more critical positivity ratios ("tipping points"). In response to our critique, Fredrickson and Losada (2013) withdrew their nonlinear dynamics model, but Fredrickson (December 2013) reaffirmed some claims concerning positivity ratios on the basis of empirical studies. We would therefore like to comment briefly on these claims and the alleged supporting evidence.


Subject(s)
Affect , Mental Health , Models, Psychological , Female , Humans , Male
20.
Am Psychol ; 69(6): 636-7, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25197852

ABSTRACT

Replies to the comments of Nickerson (see record 2014-36500-010), Guastello (see record 2014-36500-011), Musau (see record 2014-36500-013), Hämäläinen et al. (see record 2014-36500-014), and Lefebvre and Schwartz (see record 2014-36500-015) on the authors' article (see record 2013-24609-001). Fredrickson and Losada's (2005) article was the subject of over 350 scholarly citations before our critique (Brown et al., 2013) appeared, and its principal "conclusions" have been featured in many lectures and public presentations by senior members of the positive psychology research community, although its deficiencies ought to have been visible to anyone with a modest grasp of mathematics and a little curiosity. Unfortunately, because human behavior is, after all, complex and difficult to understand, we have no way of knowing whether the fact that it took so long for these deficiencies to be recognized was due to an unwarranted degree of optimism about the reliability of the peer-review process, a reluctance to make waves in the face of powerful interests, a general lack of critical thinking within positive psychology, or some other factor. We hope that our revelation of the problems with the critical positivity ratio ultimately demonstrates the success of science as a self-correcting endeavor; however, we would have greatly preferred it if our work had not been necessary in the first place.


Subject(s)
Affect , Mental Health , Models, Psychological , Female , Humans , Male